Results 1 - 11 of 11
1.
Pharmacoeconomics ; 42(5): 479-486, 2024 May.
Article in English | MEDLINE | ID: mdl-38583100

ABSTRACT

Value of Information (VOI) analyses calculate the economic value that could be generated by obtaining further information to reduce uncertainty in a health economic decision model. VOI has been suggested as a tool for research prioritisation and trial design as it can highlight economically valuable avenues for future research. Recent methodological advances have made it increasingly feasible to use VOI in practice for research; however, there are critical differences between the VOI approach and the standard methods used to design research studies such as clinical trials. We aimed to highlight key differences between the research design approach based on VOI and standard clinical trial design methods, in particular the importance of considering the full decision context. We present two hypothetical examples to demonstrate that VOI methods are only accurate when (1) all feasible comparators are included in the decision model when designing research, and (2) all comparators are retained in the decision model once the data have been collected and a final treatment recommendation is made. Omitting comparators from either the design or analysis phase of research when using VOI methods can lead to incorrect trial designs and/or treatment recommendations. Overall, we conclude that incorrectly specifying the health economic model by ignoring potential comparators can lead to misleading VOI results and potentially waste scarce research resources.
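The comparator-omission point can be illustrated with a toy probabilistic sensitivity analysis. The sketch below (Python; the net-benefit distributions are entirely invented, not taken from the article) computes the expected value of perfect information (EVPI) with and without a third comparator:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # probabilistic sensitivity analysis samples

# Invented net monetary benefit draws for three comparators
nb = np.column_stack([
    rng.normal(1000, 400, n),  # comparator A
    rng.normal(1050, 400, n),  # comparator B
    rng.normal(1040, 100, n),  # comparator C: similar mean, less uncertain
])

def evpi(nb):
    """EVPI = E[max_t NB_t] - max_t E[NB_t]."""
    return nb.max(axis=1).mean() - nb.mean(axis=0).max()

full = evpi(nb)            # all feasible comparators included
reduced = evpi(nb[:, :2])  # comparator C omitted from the model
print(full, reduced)
```

Dropping comparator C changes the computed value of further research, which is exactly the failure mode the abstract warns about when comparators are omitted at the design or analysis stage.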


Subjects
Clinical Trials as Topic, Decision Support Techniques, Models, Economic, Research Design, Humans, Clinical Trials as Topic/economics, Clinical Trials as Topic/methods, Cost-Benefit Analysis, Uncertainty, Decision Making
2.
J Am Coll Cardiol ; 81(2): 156-168, 2023 01 17.
Article in English | MEDLINE | ID: mdl-36631210

ABSTRACT

BACKGROUND: Despite poor cardiovascular outcomes, there are no dedicated, validated risk stratification tools to guide investigation or treatment in type 2 myocardial infarction. OBJECTIVES: The goal of this study was to derive and validate a risk stratification tool for the prediction of death or future myocardial infarction in patients with type 2 myocardial infarction. METHODS: The T2-risk score was developed in a prospective multicenter cohort of consecutive patients with type 2 myocardial infarction. Cox proportional hazards models were constructed for the primary outcome of myocardial infarction or death at 1 year using variables selected a priori based on clinical importance. Discrimination was assessed by area under the receiver-operating characteristic curve (AUC). Calibration was investigated graphically. The tool was validated in a single-center cohort of consecutive patients and in a multicenter cohort study from sites across Europe. RESULTS: There were 1,121, 250, and 253 patients in the derivation, single-center, and multicenter validation cohorts, with the primary outcome occurring in 27% (297 of 1,121), 26% (66 of 250), and 14% (35 of 253) of patients, respectively. The T2-risk score incorporating age, ischemic heart disease, heart failure, diabetes mellitus, myocardial ischemia on electrocardiogram, heart rate, anemia, estimated glomerular filtration rate, and maximal cardiac troponin concentration had good discrimination (AUC: 0.76; 95% CI: 0.73-0.79) for the primary outcome and was well calibrated. Discrimination was similar in the consecutive patient (AUC: 0.83; 95% CI: 0.77-0.88) and multicenter (AUC: 0.74; 95% CI: 0.64-0.83) cohorts. T2-risk provided improved discrimination over the Global Registry of Acute Coronary Events 2.0 risk score in all cohorts.
CONCLUSIONS: The T2-risk score performed well in different health care settings and could help clinicians to prognosticate, as well as target investigation and preventative therapies more effectively. (High-Sensitivity Troponin in the Evaluation of Patients With Suspected Acute Coronary Syndrome [High-STEACS]; NCT01852123).


Subjects
Anterior Wall Myocardial Infarction, Diabetes Mellitus, Type 2, Myocardial Infarction, Humans, Risk Assessment, Cohort Studies, Prospective Studies, Prognosis, Predictive Value of Tests, Troponin I, Myocardial Infarction/diagnosis, Myocardial Infarction/epidemiology, Myocardial Infarction/drug therapy, Risk Factors
3.
Ecol Lett ; 23(2): 305-315, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31762170

ABSTRACT

Geographic isolation substantially contributes to species endemism on oceanic islands when speciation involves the colonisation of a new island. However, less is understood about the drivers of speciation within islands. What is lacking is a general understanding of the geographic scale of gene flow limitation within islands, and thus the spatial scale and drivers of geographical speciation within insular contexts. Using a community of beetle species, we show that when dispersal ability and climate tolerance are restricted, microclimatic variation over distances of only a few kilometres can maintain strong geographic isolation extending back several million years. Further to this, we demonstrate congruent diversification with gene flow across species, mediated by Quaternary climate oscillations that have facilitated a dynamic of isolation and secondary contact. The unprecedented scale of parallel species responses to a common environmental driver for evolutionary change has profound consequences for understanding past and future species responses to climate variation.


Subjects
Biological Evolution, Climate, Gene Flow, Genetic Speciation, Geography, Islands, Oceans and Seas, Phylogeny
4.
Med Decis Making ; 39(4): 346-358, 2019 05.
Article in English | MEDLINE | ID: mdl-31161867

ABSTRACT

Background. The expected value of sample information (EVSI) determines the economic value of any future study with a specific design aimed at reducing uncertainty about the parameters underlying a health economic model. This has potential as a tool for trial design; the cost and value of different designs could be compared to find the trial with the greatest net benefit. However, despite recent developments, EVSI analysis can be slow, especially when optimizing over a large number of different designs. Methods. This article develops a method to reduce the computation time required to calculate the EVSI across different sample sizes. Our method extends the moment-matching approach to EVSI estimation to optimize over different sample sizes for the underlying trial while retaining a similar computational cost to a single EVSI estimate. This extension calculates the posterior variance of the net monetary benefit across alternative sample sizes and then uses Bayesian nonlinear regression to estimate the EVSI across these sample sizes. Results. A health economic model developed to assess the cost-effectiveness of interventions for chronic pain demonstrates that this EVSI calculation method is fast and accurate for realistic models. This example also highlights how different trial designs can be compared using the EVSI. Conclusion. The proposed estimation method is fast and accurate when calculating the EVSI across different sample sizes. This will allow researchers to realize the potential of using the EVSI to determine an economically optimal trial design for reducing uncertainty in health economic models. Limitations. Our method involves rerunning the health economic model, which can be more computationally expensive than some recent alternatives, especially in complex models.
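As a rough illustration of the quantity being optimised (not the moment-matching method itself), the following sketch computes the EVSI of a hypothetical two-option decision across candidate trial sample sizes using a conjugate normal model; every number is invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical decision problem (all numbers invented): adopt a treatment
# whose incremental net benefit is INB = 5000 * theta, where
# theta ~ N(0.1, 0.2^2) a priori; a future trial would observe n patients
# with x_i ~ N(theta, 0.5^2).
mu0, sd0, sd_x, scale = 0.1, 0.2, 0.5, 5000.0

def evsi(n, n_outer=20_000):
    """EVSI by simulating summaries of future trials (conjugate model)."""
    prior_value = max(0.0, scale * mu0)             # decide on current evidence
    theta = rng.normal(mu0, sd0, n_outer)           # plausible true parameters
    xbar = rng.normal(theta, sd_x / np.sqrt(n))     # future trial's sample mean
    w = (n / sd_x**2) / (n / sd_x**2 + 1 / sd0**2)  # conjugate posterior weight
    post_mean = w * xbar + (1 - w) * mu0            # E[theta | data]
    return np.maximum(0.0, scale * post_mean).mean() - prior_value

# Compare candidate designs: information value net of a per-patient cost
designs = [10, 25, 50, 100, 200]
values = {n: evsi(n) for n in designs}
net = {n: values[n] - 2.0 * n for n in designs}
best = max(net, key=net.get)
print(values, best)
```

The article's contribution is to avoid recomputing `evsi(n)` separately for every design; the brute-force loop above is only meant to show what "optimising the EVSI over sample sizes" means.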


Subjects
Data Accuracy, Nonlinear Dynamics, Sample Size, Bayes Theorem, Humans, Statistics as Topic/methods
5.
Med Decis Making ; 38(2): 163-173, 2018 02.
Article in English | MEDLINE | ID: mdl-29126364

ABSTRACT

BACKGROUND: The Expected Value of Sample Information (EVSI) is used to calculate the economic value of a new research strategy. Although this value would be important to both researchers and funders, there are very few practical applications of the EVSI. This is due to computational difficulties associated with calculating the EVSI in practical health economic models using nested simulations. METHODS: We present an approximation method for the EVSI that is framed in a Bayesian setting and is based on estimating the distribution of the posterior mean of the incremental net benefit across all possible future samples, known as the distribution of the preposterior mean. Specifically, this distribution is estimated using moment matching coupled with simulations that are available for probabilistic sensitivity analysis, which is typically mandatory in health economic evaluations. RESULTS: This novel approximation method is applied to a health economic model that has previously been used to assess the performance of other EVSI estimators and accurately estimates the EVSI. The computational time for this method is competitive with other methods. CONCLUSION: We have developed a new calculation method for the EVSI which is computationally efficient and accurate. LIMITATIONS: This novel method relies on some additional simulation so can be expensive in models with a large computational cost.


Subjects
Decision Support Techniques, Models, Economic, Monte Carlo Method, Algorithms, Cost-Benefit Analysis, Economics, Medical
6.
Med Decis Making ; 37(7): 747-758, 2017 10.
Article in English | MEDLINE | ID: mdl-28410564

ABSTRACT

In recent years, value-of-information analysis has become more widespread in health economic evaluations, specifically as a tool to guide further research and perform probabilistic sensitivity analysis. This is partly due to methodological advancements allowing for the fast computation of a typical summary known as the expected value of partial perfect information (EVPPI). A recent review discussed some approximation methods for calculating the EVPPI, but as the research has been active over the intervening years, that review does not discuss some key estimation methods. Therefore, this paper presents a comprehensive review of these new methods. We begin by providing the technical details of these computation methods. We then present two case studies in order to compare the estimation performance of these new methods. We conclude that a method based on nonparametric regression offers the best method for calculating the EVPPI in terms of accuracy, computational time, and ease of implementation. This means that the EVPPI can now be used practically in health economic evaluations, especially as all the methods are developed in parallel with R functions and a web app to aid practitioners.
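The regression idea behind the recommended EVPPI estimator can be sketched directly on PSA output. In the sketch below a cubic polynomial fit stands in for the flexible nonparametric smoothers discussed in the abstract, and the two-treatment net-benefit model is invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000  # probabilistic sensitivity analysis (PSA) draws

# phi is the parameter of interest; psi is the remaining uncertainty.
phi = rng.normal(0.0, 1.0, n)
psi = rng.normal(0.0, 1.0, n)
nb = np.column_stack([
    1000 + 300 * phi + 200 * psi,   # treatment 1
    1020 - 250 * phi + 350 * psi,   # treatment 2
])

# Regress each treatment's net benefit on phi: the fitted values estimate
# the conditional expectation E[NB_t | phi]. A cubic polynomial stands in
# for a flexible nonparametric smoother.
X = np.vander(phi, 4)
fitted = np.column_stack([
    X @ np.linalg.lstsq(X, nb[:, t], rcond=None)[0] for t in range(2)
])

# EVPPI = E_phi[ max_t E[NB_t | phi] ] - max_t E[NB_t]
evppi = fitted.max(axis=1).mean() - nb.mean(axis=0).max()
print(evppi)
```

The attraction of this approach, as the review notes, is that it reuses the PSA samples already mandated in health economic evaluations rather than requiring a nested inner simulation per draw.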


Subjects
Cost-Benefit Analysis/methods, Regression Analysis, Statistics, Nonparametric, Algorithms, Decision Support Techniques, Humans, Influenza Vaccines/economics, Malaria/economics, Models, Economic, Monte Carlo Method
7.
Stat Med ; 35(23): 4264-80, 2016 10 15.
Article in English | MEDLINE | ID: mdl-27189534

ABSTRACT

The Expected Value of Partial Perfect Information (EVPPI) is a decision-theoretic measure of the 'cost' of parametric uncertainty in decision making used principally in health economic decision making. Despite this decision-theoretic grounding, the uptake of EVPPI calculations in practice has been slow. This is in part due to the prohibitive computational time required to estimate the EVPPI via Monte Carlo simulations. However, recent developments have demonstrated that the EVPPI can be estimated by non-parametric regression methods, which have significantly decreased the computation time required to approximate the EVPPI. Under certain circumstances, high-dimensional Gaussian Process (GP) regression is suggested, but this can still be prohibitively expensive. Applying fast computation methods developed in spatial statistics using Integrated Nested Laplace Approximations (INLA) and projecting from a high-dimensional into a low-dimensional input space allows us to decrease the computation time for fitting these high-dimensional GPs, often substantially. We demonstrate that the EVPPI calculated using our method for GP regression is in line with the standard GP regression method and that despite the apparent methodological complexity of this new method, R functions are available in the package BCEA to implement it simply and efficiently. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.


Subjects
Monte Carlo Method, Uncertainty, Cost-Benefit Analysis, Decision Making, Humans, Regression Analysis
8.
Comput Stat Data Anal ; 56(12): 3809-3820, 2012 Dec 01.
Article in English | MEDLINE | ID: mdl-22754052

ABSTRACT

A primary challenge in unsupervised clustering using mixture models is the selection of a family of basis distributions flexible enough to succinctly represent the distributions of the target subpopulations. In this paper we introduce a new family of Gaussian Well distributions (GWDs) for clustering applications where the target subpopulations are characterized by hollow [hyper-]elliptical structures. We develop the primary theory pertaining to the GWD, including mixtures of GWDs, selection of prior distributions, and computationally efficient inference strategies using Markov chain Monte Carlo. We demonstrate the utility of our approach, as compared to standard Gaussian mixture methods on a synthetic dataset, and exemplify its applicability on an example from immunofluorescence imaging, emphasizing the improved interpretability and parsimony of the GWD-based model.

9.
J Comput Biol ; 19(6): 745-55, 2012 Jun.
Article in English | MEDLINE | ID: mdl-22697244

ABSTRACT

Phylogeographic ancestral inference is an issue frequently arising in population ecology that aims to understand the geographical roots and structure of species. Here, we specifically address relatively small-scale mtDNA datasets (typically less than 500 sequences with fewer than 1000 nucleotides), focusing on ancestral location inference. Our approach uses a coalescent modelling framework projected onto haplotype trees in order to reduce computational complexity while remaining faithful to complex evolutionary processes. Statistical innovations of the last few years have allowed for computationally feasible yet accurate inferences in phylogenetic frameworks. We implement our methods on a set of synthetic datasets and show how, despite high uncertainty in terms of identifying the root haplotype, estimation of the ancestral location naturally encompasses lower uncertainty, allowing us to pinpoint the Maximum A Posteriori estimates for ancestral locations. We then combine our inference methods with the phylogeographic clustering approach presented in Manolopoulou et al. (2011) on a real dataset from weevils in the Iberian peninsula in order to infer ancestral locations as well as population substructure.


Subjects
DNA, Mitochondrial/genetics, Genetic Speciation, Haplotypes, Models, Genetic, Weevils/genetics, Animals, DNA, Mitochondrial/classification, Databases, Nucleic Acid, Molecular Evolution, Phylogeny, Phylogeography, Portugal, Spain, Weevils/classification
10.
Interface Focus ; 1(6): 909-21, 2011 Dec 06.
Article in English | MEDLINE | ID: mdl-23226589

ABSTRACT

Phylogeographic methods have attracted a lot of attention in recent years, stressing the need to provide a solid statistical framework for many existing methodologies so as to draw statistically reliable inferences. Here, we take a flexible fully Bayesian approach by reducing the problem to a clustering framework, whereby the population distribution can be explained by a set of migrations, forming geographically stable population clusters. These clusters are such that they are consistent with a fixed number of migrations on the corresponding (unknown) subdivided coalescent tree. Our methods rely upon a clustered population distribution, and allow for inclusion of various covariates (such as phenotype or climate information) at little additional computational cost. We illustrate our methods with an example from weevil mitochondrial DNA sequences from the Iberian peninsula.

11.
Bayesian Anal ; 5(3): 1-22, 2010.
Article in English | MEDLINE | ID: mdl-20865145

ABSTRACT

One of the challenges in using Markov chain Monte Carlo for model analysis in studies with very large datasets is the need to scan through the whole data at each iteration of the sampler, which can be computationally prohibitive. Several approaches have been developed to address this, typically drawing computationally manageable subsamples of the data. Here we consider the specific case where most of the data from a mixture model provides little or no information about the parameters of interest, and we aim to select subsamples such that the information extracted is most relevant. The motivating application arises in flow cytometry, where several measurements from a vast number of cells are available. Interest lies in identifying specific rare cell subtypes and characterizing them according to their corresponding markers. We present a Markov chain Monte Carlo approach where an initial subsample of the full dataset is used to guide selection sampling of a further set of observations targeted at a scientifically interesting, low probability region. We define a Sequential Monte Carlo strategy in which the targeted subsample is augmented sequentially as estimates improve, and introduce a stopping rule for determining the size of the targeted subsample. An example from flow cytometry illustrates the ability of the approach to increase the resolution of inferences for rare cell subtypes.
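A minimal two-stage version of the targeted-subsampling idea (random pilot, then selection sampling of a low-probability region) can be sketched as follows. The data-generating numbers are invented, and the actual method uses mixture-model fits and sequential augmentation rather than the simple quantile cut-offs used here:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical one-dimensional "flow cytometry" marker: a common cell
# population plus a rare subtype (0.5% of cells) centred at 6.
n = 1_000_000
is_rare = rng.random(n) < 0.005
data = np.where(is_rare, rng.normal(6.0, 0.3, n), rng.normal(0.0, 1.0, n))
idx = np.arange(n)

# Stage 1: a small random pilot subsample locates the bulk of the data.
pilot = rng.choice(idx, 2_000, replace=False)
lo, hi = np.quantile(data[pilot], [0.01, 0.99])

# Stage 2: selection sampling targeted at the low-probability tails,
# where the scientifically interesting rare subtype lives.
target = idx[(data < lo) | (data > hi)]
enriched = rng.choice(target, 2_000, replace=False)

frac_random = is_rare[pilot].mean()       # rare cells in a plain subsample
frac_targeted = is_rare[enriched].mean()  # rare cells after targeting
print(frac_random, frac_targeted)
```

The targeted subsample contains a far higher proportion of the rare subtype than a plain random subsample of the same size, which is what sharpens inference for the low-probability region without scanning the full dataset at every MCMC iteration.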
